perm filename NARAYA.1[LET,JMC] blob
sn#641710 filedate 1982-02-16 generic text, type C, neo UTF8
@make(letterhead,Phone"497-4430", Who"John McCarthy",Logo Old, Department CSD)
@style(indent 8)
@begin(address)
Dr. A. Narayanan
Department of Computer Science
Physics Building
Stocker Road
Exeter
EX4 4QL
UNITED KINGDOM
@end(address)
@greeting(Dear Dr. Narayanan:)
@begin(body)
Thanks for the copy of your paper. Unfortunately, it seems to me
that you have gotten involved in a somewhat sterile philosophical controversy
rather than treating issues of importance to computer science (especially AI),
namely the conditions for ascribing specific mental qualities rather than
the justification for ascribing any at all.
Here are some specific comments on your paper.
1. On page 3 you say "At
the heart of basis X will usually lie a view concerning the concept of a
person". It seems to me that this is a mistake, and the mistake is fundamental
to your paper. A concept of a person will inevitably involve an indefinite
and controversial bundle of mental qualities.
To avoid sterile hairsplitting about which qualities are essential
to the concept of person and which are not, it is necessary to determine
conditions for ascribing individual qualities. This is especially true
because, while many present programs can reasonably be ascribed certain
mental qualities, probably no present program, and no program anyone now
knows how to write, is a serious candidate for personhood.
Thus many present programs can be ascribed some beliefs, though not
introspective beliefs. However, in the present state of AI we can
probably write programs that can be ascribed some introspective beliefs.
Similarly, ascribing some mental qualities to many animals doesn't
warrant ascribing them all, or a unified concept of person, to them. Thus
recent experiments on apes and monkeys looking at mirrors determined that
apes recognize the spots on their foreheads shown in the mirrors and
try to rub them off, but monkeys don't. The experimenters concluded,
somewhat rashly in my opinion, that
apes have concepts of self but monkeys don't. Whether or not these
particular experiments are correctly interpreted, the issues are meaningful,
and a clever experimental approach can yield information about what
mental qualities are properly ascribed to what animals.
2. The distinction between program and machine is necessary,
because the same machine can run different programs. Otherwise, it
wouldn't be so necessary to ascribe the mental predicates to the program
rather than the machine.
3. The program-machine distinction was never intended to carry
the burdens that have motivated philosophical dualism,
namely trying to get around determinism and trying to imagine the soul
persisting after the death of the body. (It seems to me that recent
versions of dualism which don't explicitly have these goals are partly
entangled in the past without recognizing it.)
4. In company with many philosophers, you seem to have involved yourself
in a vain search for
some kind of philosophical certainty. Ascription of certain mental
qualities to ourselves and other people and machines requires only the same
kind of probabilistic evidence as does any other scientific assertion.
It is worth noting that the language we use for thinking about our
own mental states comes in large measure from reading how others
have described their mental states. The way people introspect is
different in different cultures, and fashions change with time.
5. While pain cannot reasonably be ascribed to any present
computer program, it is dogmatic to assert that no program can exist
to which pain could reasonably be ascribed. Clearly pain would have
to involve sensation that enters into the motivational structure in
the right way.
6. The statement "Since I cannot in principle enter your mind ..."
seems wrong. Tomorrow the advance of science may permit you
to acquire full knowledge of what is in someone else's mind.
7. On page 8 you say "The computer has no knowledge whatsoever
of its own cleverness". It might if it is so programmed.
8. On page 18 you say "Notice that there is no question of the computer,
when asked to specify what it is doing, monitoring or observing its own
actions and behaviour in order to make an action-ascription". I don't
see why this can't be programmed to occur.
If you were a mere philosopher, I wouldn't be so disappointed,
but I don't see how such a holistic approach can contribute to computer
science, e.g., to making more intelligent programs and programs to
which more mental qualities can be legitimately ascribed.
I am also somewhat disappointed that you don't deal with any of the
issues discussed in my paper beyond the first sentence.
@end(body)
John McCarthy
Professor of Computer Science